test(integration): support v2 volume #2170
Conversation
Walkthrough

The changes in this pull request primarily involve modifications to volume management and testing within the Longhorn integration tests. Key updates include the introduction of new constants and parameters, specifically a DATA_ENGINE constant that selects between the v1 and v2 data engines for test volumes.
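As a quick orientation, the pattern the review below keeps referring to is sketched here. This is an illustration based on the snippets quoted in the review, not the PR's exact code; the real definition lives in manager/integration/tests/common.py around lines 326-332.

```python
import os

# RUN_V2_TEST selects the data engine once for the whole test run; every test
# module then imports DATA_ENGINE and passes it to volume creation.
enable_v2 = os.environ.get("RUN_V2_TEST")
DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
```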
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (4)
manager/integration/tests/test_basic.py (1)
5340-5340: Simplify Boolean Expression

The expression `True if access_mode == "rwx" else False` can be simplified to `access_mode == "rwx"` for better readability and conciseness. Apply this diff to simplify the expression:

- migratable=True if access_mode == "rwx" else False,
+ migratable=access_mode == "rwx",

🧰 Tools
🪛 Ruff (0.8.0)
5340-5340: Remove unnecessary `True if ... else False` (SIM210)
manager/integration/tests/test_scheduling.py (2)
Line range hint 150-171: Include `dataEngine` parameter in volume creation

Consider adding the `dataEngine=DATA_ENGINE` parameter to the `create_and_check_volume` call to ensure the test volume uses the correct data engine. Apply this diff:

- volume = create_and_check_volume(client, volume_name)
+ volume = create_and_check_volume(client, volume_name, dataEngine=DATA_ENGINE)

Line range hint 190-215: Include `dataEngine` parameter in volume creation

Consider adding the `dataEngine=DATA_ENGINE` parameter to the `create_and_check_volume` call for consistency and to ensure the volume uses the correct data engine in `test_soft_anti_affinity_detach`. Apply this diff:

- volume = create_and_check_volume(client, volume_name)
+ volume = create_and_check_volume(client, volume_name, dataEngine=DATA_ENGINE)
manager/integration/tests/common.py (1)
328-331: Simplify conditional assignment with a ternary operator

Consider using a ternary operator for conciseness. Apply this diff:

 enable_v2 = os.environ.get('RUN_V2_TEST')
-if enable_v2 == "true":
-    DATA_ENGINE = "v2"
-else:
-    DATA_ENGINE = "v1"
+DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"

🧰 Tools
🪛 Ruff (0.8.0)
328-331: Use ternary operator `DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"` instead of `if`-`else` block (SIM108)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- manager/integration/tests/common.py (16 hunks)
- manager/integration/tests/test_basic.py (63 hunks)
- manager/integration/tests/test_recurring_job.py (43 hunks)
- manager/integration/tests/test_scheduling.py (26 hunks)
- manager/integration/tests/test_system_backup_restore.py (8 hunks)
- test_framework/Jenkinsfile (1 hunks)
- test_framework/scripts/longhorn-setup.sh (2 hunks)
🧰 Additional context used
🪛 Shellcheck (0.10.0)
test_framework/scripts/longhorn-setup.sh
[warning] 407-407: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🪛 Ruff (0.8.0)
manager/integration/tests/common.py
328-331: Use ternary operator `DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"` instead of `if`-`else` block (SIM108)
manager/integration/tests/test_basic.py
5340-5340: Remove unnecessary `True if ... else False` (SIM210)
🔇 Additional comments (71)
manager/integration/tests/test_recurring_job.py (37)
Line range hint 158-189
: LGTM!
The test case looks good. It creates a volume, attaches it, writes data, sets up recurring snapshot and backup jobs, and verifies that the expected number of snapshots and backups are created after waiting for the scheduled times.
235-236
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
290-291
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
328-330
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
Line range hint 440-441
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
494-495
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
548-549
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
677-678
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
842-843
: LGTM!
The test case looks good. It verifies the behavior of recurring backup jobs when the volume is detached unexpectedly during the backup process. The volume should still be able to reattach and complete the backup.
992-993
: LGTM!
The test case looks good. It creates two volumes, sets up recurring snapshot and backup jobs with different schedules and groups, and verifies that the expected number of snapshots and backups are created on each volume after waiting for the scheduled times.
1023-1026
: LGTM!
The volumes are being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1083-1084
: LGTM!
The test case looks good. It creates a volume, sets up a recurring backup job, and verifies that the job runs as expected with the volume attached to the default node.
1106-1108
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1128-1129
: LGTM!
The test case looks good. It creates multiple recurring jobs, verifies they are created successfully, then deletes them one by one and checks the corresponding cron jobs are removed as expected.
1169-1170
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1276-1277
: LGTM!
The test case looks good. It verifies that the recurring job labels are correctly added and removed from the volume when the jobs are added to or removed from the volume.
1323-1324
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1424-1425
: LGTM!
The test case looks good. It verifies that when a recurring job and job group use the same name, adding/removing the job or group updates the volume labels correctly.
1455-1460
: LGTM!
The volumes are being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1509-1510
: LGTM!
The test case looks good. It creates two volumes, sets up recurring backup jobs, and verifies that the jobs run as expected with the volumes attached to the default node. It also checks adding a job to one volume doesn't impact the other.
1543-1544
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1581-1582
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1605-1606
: LGTM!
The test case looks good. It creates two volumes, sets up a recurring snapshot job, and verifies that the job creates snapshots on both volumes as expected.
1633-1636
: LGTM!
The volumes are being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1681-1682
: LGTM!
The test case looks good. It creates a volume, sets up a recurring snapshot deletion job, and verifies that the job deletes old snapshots and retains the configured number of most recent snapshots.
1717-1718
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1811-1812
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1880-1881
: LGTM!
The volume is being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
1943-1944
: LGTM!
The test case looks good. It creates two volumes, sets up a recurring backup job, writes data to the volumes, and verifies that the job creates backups of both volumes as expected.
1969-1972
: LGTM!
The volumes are being created with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
2011-2012
: LGTM!
The test case looks good. It verifies that recurring jobs and job groups are correctly restored from a backup.
2081-2082
: LGTM!
The volume is being created with the correct parameters including the dataEngine
based on the DATA_ENGINE
variable.
2116-2117
: LGTM!
The volume is being created from the backup with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
2127-2128
: LGTM!
The volume is being created from the backup with the correct dataEngine
parameter based on the DATA_ENGINE
variable.
2135-2136
: LGTM!
The test case looks good. It verifies that the recurring filesystem trim job runs successfully on a volume.
2213-2214
: LGTM!
The test case looks good. It verifies that recurring job labels on the PVC are correctly synced to the volume, and are cleaned up when the corresponding recurring jobs are deleted.
2324-2325
: LGTM!
The test case looks good. It verifies that the recurring-job.longhorn.io/source
label on the PVC controls whether the PVC recurring job labels are synced to the volume or not.
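Since each of the comments above makes the same point about volume creation, here is one hedged illustration of the call pattern they describe; the test name, volume name, size, and replica count are placeholders rather than lines from the PR diff, and the exact client.create_volume signature is assumed from the snippets quoted in this review:

```python
import pytest

from common import DATA_ENGINE  # "v1" or "v2", derived from RUN_V2_TEST


@pytest.mark.v2_volume_test
@pytest.mark.recurring_job
def test_recurring_job_example(client, volume_name):
    # Placeholder values; the point is the dataEngine keyword added by this PR.
    volume = client.create_volume(name=volume_name,
                                  size="2Gi",
                                  numberOfReplicas=2,
                                  dataEngine=DATA_ENGINE)
    assert volume.name == volume_name
```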
manager/integration/tests/test_scheduling.py (21)
92-92
: Import DATA_ENGINE
Adding from common import DATA_ENGINE
correctly imports the DATA_ENGINE
constant required for specifying the data engine in volume creation.
150-150
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_soft_anti_affinity_scheduling
as a test for v2 volumes.
190-190
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_soft_anti_affinity_detach
as a test for v2 volumes.
236-236
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_hard_anti_affinity_scheduling
as a test for v2 volumes.
286-286
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_hard_anti_affinity_detach
as a test for v2 volumes.
340-340
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_hard_anti_affinity_live_rebuild
as a test for v2 volumes.
389-389
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_hard_anti_affinity_offline_rebuild
as a test for v2 volumes.
488-488
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_replica_auto_balance_node_least_effort
as a test for v2 volumes.
620-620
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_replica_auto_balance_node_best_effort
as a test for v2 volumes.
1107-1108
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
in the client.create_volume
call ensures the volume is created with the correct data engine in replica_auto_balance_with_data_locality_test
.
1292-1292
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_data_locality_basic
as a test for v2 volumes.
1527-1528
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine in the test case.
1600-1601
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine in the test case.
1687-1688
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine in the test case.
1904-1905
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine in test_soft_anti_affinity_scheduling_volume_enable
.
1957-1958
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine in test_soft_anti_affinity_scheduling_volume_disable
.
1988-1988
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_data_locality_strict_local_node_affinity
as a test for v2 volumes.
2098-2099
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine when calling client.create_volume
.
2230-2231
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine in test_global_disk_soft_anti_affinity
.
2318-2319
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine when calling client.create_volume
.
2401-2402
: Specify dataEngine
in volume creation
Including dataEngine=DATA_ENGINE
ensures the volume is created with the correct data engine in test_volume_disk_soft_anti_affinity
.
manager/integration/tests/common.py (6)
221-221
: Import SETTING_V2_DATA_ENGINE
Adding SETTING_V2_DATA_ENGINE
to the imports allows setting the v2 data engine configuration.
1580-1582
: Add dataEngine
to storage class parameters
Including dataEngine
in the storage class manifest when DATA_ENGINE
is 'v2' ensures volumes are provisioned with the correct data engine.
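A minimal sketch of what that conditional parameter can look like if the storage class manifest is assembled as a Python dict; the surrounding keys are illustrative, and only the dataEngine addition reflects the change being reviewed:

```python
# Illustrative manifest fragment, not the actual common.py code.
storage_class_manifest = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "longhorn-test"},
    "provisioner": "driver.longhorn.io",
    "parameters": {"numberOfReplicas": "3"},
}

if DATA_ENGINE == "v2":
    # Only v2 runs need the explicit engine hint in the storage class.
    storage_class_manifest["parameters"]["dataEngine"] = DATA_ENGINE
```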
1806-1809
: Ensure consistent storage of default replica directory
The check for the existence of DEFAULT_REPLICA_DIRECTORY
is skipped when RUN_V2_TEST
is "true"
, which is appropriate since v2 data engine may not use the same storage path.
3610-3617
: Set default data engine for tests
Initializing DATA_ENGINE
based on the RUN_V2_TEST
environment variable correctly configures the data engine for the tests.
Line range hint 3627-3643
: Configure default disk path and disk type
Adjusting default_disk_path
and disk_type
based on the data engine ensures that the disks are set up correctly for both v1 and v2 data engines.
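A hedged sketch of that selection logic follows; the block device path /dev/xvdh is the one quoted elsewhere in this review, while the v1 default path is an assumption added for illustration:

```python
import os

# v2 uses a raw block device, v1 a filesystem directory.
if os.environ.get("RUN_V2_TEST") == "true":
    default_disk_path = "/dev/xvdh"            # BLOCK_DEV_PATH in common.py
    disk_type = "block"
else:
    default_disk_path = "/var/lib/longhorn/"   # assumed v1 default disk path
    disk_type = "filesystem"
```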
6114-6115
: Include dataEngine
parameter in create_volume
function
The addition of the dataEngine
parameter to the create_volume
function allows specifying the data engine when creating volumes, enhancing flexibility.
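A sketch of how such a helper signature might look after the change; apart from the dataEngine keyword, the parameter names below are assumptions rather than the actual common.py signature:

```python
def create_volume(client, vol_name, size, num_replicas,
                  data_engine=DATA_ENGINE):
    # Defaulting to the module-level DATA_ENGINE keeps existing call sites
    # working while still letting individual tests override the engine.
    return client.create_volume(name=vol_name,
                                size=size,
                                numberOfReplicas=num_replicas,
                                dataEngine=data_engine)
```

Defaulting the new argument is what keeps the many existing call sites listed throughout this review valid without modification.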
manager/integration/tests/test_system_backup_restore.py (7)
38-38
: Import DATA_ENGINE
Adding from common import DATA_ENGINE
imports the necessary constant for specifying the data engine in volume operations.
48-48
: Add v2 volume test marker
The use of @pytest.mark.v2_volume_test
appropriately marks test_system_backup_and_restore
as a test for v2 volumes.
Line range hint 76-106
: Add test for system backup and restore with volume data
Introducing test_system_backup_and_restore_volume_with_data
provides coverage for backup and restore processes involving volumes with data, ensuring data integrity after restoration.
Line range hint 210-241
: Add test for volume backup policy if-not-present
The test_system_backup_with_volume_backup_policy_if_not_present
function correctly tests the behavior when the volume backup policy is set to if-not-present
, ensuring backups are created only when necessary.
Line range hint 267-302
: Add test for volume backup policy always
The test_system_backup_with_volume_backup_policy_always
function appropriately tests that backups are always created regardless of existing backups when the policy is set to always
.
Line range hint 328-358
: Add test for volume backup policy disabled
The test_system_backup_with_volume_backup_policy_disabled
function verifies that no backups are created when the policy is set to disabled
, ensuring correct behavior.
Line range hint 360-383
: Add test for system backup deletion with name prefix overlap
The test_system_backup_delete_when_other_system_backup_using_name_as_prefix
function ensures that deleting a system backup does not unintentionally delete other backups with similar name prefixes.
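To make the three volume backup policies concrete, the decision logic those tests exercise can be summarized as below; this is a pure illustration derived from the test descriptions above, not Longhorn manager code:

```python
def should_backup_volume(policy, has_existing_backup):
    # if-not-present: back up only when the volume has no backup yet.
    # always:         back up regardless of existing backups.
    # disabled:       never create a volume backup.
    if policy == "if-not-present":
        return not has_existing_backup
    if policy == "always":
        return True
    if policy == "disabled":
        return False
    raise ValueError(f"unknown volume-backup-policy: {policy}")


assert should_backup_volume("if-not-present", has_existing_backup=True) is False
assert should_backup_volume("always", has_existing_backup=True) is True
assert should_backup_volume("disabled", has_existing_backup=False) is False
```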
--env TF_VAR_extra_block_device=${RUN_V2_TEST} \
--env RUN_V2_TEST=${RUN_V2_TEST} \
Define the `RUN_V2_TEST` Variable Before Use

The variable `RUN_V2_TEST` is used here but has not been defined above. Please define `RUN_V2_TEST` similarly to other parameters to prevent potential runtime errors. Apply this diff to define `RUN_V2_TEST`:
+def RUN_V2_TEST = params.RUN_V2_TEST ? params.RUN_V2_TEST : false
Committable suggestion skipped: line range outside the PR's diff.
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" | ||
Use of Undefined Variable `TF_VAR_extra_block_device`

The variable `TF_VAR_extra_block_device` is referenced but not assigned before use, which may cause issues during script execution. Ensure that `TF_VAR_extra_block_device` is assigned a value before it's used. This can be done by defining it at the beginning of the script or passing it as an environment variable.
Also applies to: 475-476
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 407-407: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
@derekbit @shuo-wu It would be appreciated if you could help review this PR and longhorn/longhorn#9760
e009329 to 88a619a
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (6)
manager/integration/tests/test_csi.py (2)
Line range hint 254-409: Parameterizing `device_path` in block volume tests to avoid hardcoding

In the functions `test_csi_block_volume` and `create_and_verify_block_volume`, the `device_path` is hardcoded to `"/dev/longhorn/longhorn-test-blk"`. This could lead to conflicts if multiple tests run concurrently or if the same path is used elsewhere. Consider parameterizing `device_path` or generating a unique path for each test instance to avoid potential conflicts and improve test isolation.

Line range hint 684-722: Address the `FIXME`: Add data validation in `test_xfs_pv_existing_volume`

The test `test_xfs_pv_existing_volume` includes a `FIXME` comment indicating that data should be written in step 4 and validated in step 5 to ensure the disk isn't reformatted. Implementing this will enhance the test by verifying data integrity across formatting operations. Would you like assistance in adding the data write and validation steps to this test?

manager/integration/tests/common.py (3)

326-332: Simplify data engine version determination

The code can be simplified using a ternary operator. Also, consider adding validation for the environment variable value.

-enable_v2 = os.environ.get('RUN_V2_TEST')
-if enable_v2 == "true":
-    DATA_ENGINE = "v2"
-else:
-    DATA_ENGINE = "v1"
+enable_v2 = os.environ.get('RUN_V2_TEST', '').lower()
+if enable_v2 not in ['true', 'false']:
+    raise ValueError("RUN_V2_TEST must be 'true' or 'false'")
+DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"

🧰 Tools
🪛 Ruff (0.8.0)
328-331: Use ternary operator `DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"` instead of `if`-`else` block (SIM108)

3615-3622: Reduce code duplication in disk configuration

The environment variable check is duplicated across multiple functions. Consider extracting this logic into a helper function.

+def get_disk_config():
+    enable_v2 = os.environ.get('RUN_V2_TEST') == "true"
+    return {
+        'path': BLOCK_DEV_PATH if enable_v2 else DEFAULT_DISK_PATH,
+        'type': "block" if enable_v2 else "filesystem"
+    }

 def reset_disks_for_all_nodes(client):
-    enable_v2 = os.environ.get('RUN_V2_TEST')
-    if enable_v2 == "true":
-        default_disk_path = BLOCK_DEV_PATH
-        disk_type = "block"
-    else:
-        default_disk_path = DEFAULT_DISK_PATH
-        disk_type = "filesystem"
+    disk_config = get_disk_config()
+    default_disk_path = disk_config['path']
+    disk_type = disk_config['type']

3718-3731: Consolidate v1/v2 data engine settings update

The code handling v1/v2 data engine settings can be simplified to reduce duplication.

-    enable_v2 = os.environ.get('RUN_V2_TEST') == "true"
-    v1_setting_value = "false" if enable_v2 else "true"
-    v2_setting_value = "true" if enable_v2 else "false"
-
-    if setting_name == "v1-data-engine":
-        setting = client.by_id_setting(SETTING_V1_DATA_ENGINE)
-        client.update(setting, value=v1_setting_value)
-        continue
-
-    if setting_name == "v2-data-engine":
-        setting = client.by_id_setting(SETTING_V2_DATA_ENGINE)
-        client.update(setting, value=v2_setting_value)
-        continue
+    if setting_name in ["v1-data-engine", "v2-data-engine"]:
+        enable_v2 = os.environ.get('RUN_V2_TEST') == "true"
+        setting_value = str(enable_v2).lower() if setting_name == "v2-data-engine" \
+            else str(not enable_v2).lower()
+        setting = client.by_id_setting(setting_name)
+        client.update(setting, value=setting_value)
+        continue

manager/integration/tests/test_basic.py (1)

5340-5340: Simplify unnecessary ternary expression

The ternary expression can be simplified since it's already returning a boolean value.

-migratable=True if access_mode == "rwx" else False,
+migratable=access_mode == "rwx",

🧰 Tools
🪛 Ruff (0.8.0)
5340-5340: Remove unnecessary `True if ... else False` (SIM210)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (8)
- manager/integration/tests/common.py (16 hunks)
- manager/integration/tests/test_basic.py (63 hunks)
- manager/integration/tests/test_csi.py (10 hunks)
- manager/integration/tests/test_recurring_job.py (43 hunks)
- manager/integration/tests/test_scheduling.py (26 hunks)
- manager/integration/tests/test_system_backup_restore.py (8 hunks)
- test_framework/Jenkinsfile (1 hunks)
- test_framework/scripts/longhorn-setup.sh (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- test_framework/Jenkinsfile
🧰 Additional context used
🪛 Shellcheck (0.10.0)
test_framework/scripts/longhorn-setup.sh
[warning] 407-407: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🪛 Ruff (0.8.0)
manager/integration/tests/common.py
328-331: Use ternary operator `DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"` instead of `if`-`else` block (SIM108)
manager/integration/tests/test_basic.py
5340-5340: Remove unnecessary `True if ... else False` (SIM210)
🔇 Additional comments (32)
manager/integration/tests/test_csi.py (8)
40-40
: Importing DATA_ENGINE
from the common
module
The addition of from common import DATA_ENGINE
correctly imports the DATA_ENGINE
variable, which is necessary for specifying the data engine version in volume operations.
59-60
: Adding dataEngine=DATA_ENGINE
parameter to volume creation
Including dataEngine=DATA_ENGINE
when creating the volume ensures that the correct data engine is specified, aligning with the enhancements for v2 volume tests.
106-106
: Marking test test_csi_mount
as a v2 volume test
The decorator @pytest.mark.v2_volume_test
appropriately marks test_csi_mount
for inclusion in the v2 volume testing suite.
147-147
: Marking test test_csi_io
as a v2 volume test
The @pytest.mark.v2_volume_test
decorator correctly includes test_csi_io
in the v2 volume testing suite.
199-199
: Marking test test_csi_backup
as a v2 volume test
The decorator appropriately marks test_csi_backup
for v2 volume testing.
Line range hint 645-682
: Implement data validation in test_xfs_pv
The test test_xfs_pv
writes data to the pod and reads it back for validation, which is effective. Ensure that any data written is properly verified to confirm that the XFS filesystem functions correctly.
Line range hint 802-924
: Reviewing test_allow_volume_creation_with_degraded_availability_csi
The test test_allow_volume_creation_with_degraded_availability_csi
effectively verifies volume creation under degraded availability conditions with CSI. The steps are comprehensive and align with the test objectives.
Line range hint 925-1002
: Verifying minimal volume size handling in test_csi_minimal_volume_size
The test test_csi_minimal_volume_size
correctly checks behavior when requesting a PVC size below the minimal allowed size. It ensures that the PVC's requested size is adjusted appropriately and that the final capacity aligns with expectations.
manager/integration/tests/test_system_backup_restore.py (8)
38-38
: Importing DATA_ENGINE
from the common
module
Adding from common import DATA_ENGINE
correctly imports the DATA_ENGINE
variable needed for specifying the data engine in backup and restore operations.
48-49
: Marking test test_system_backup_and_restore
as a v2 volume test
The decorators @pytest.mark.v2_volume_test
and @pytest.mark.system_backup_restore
appropriately mark test_system_backup_and_restore
for inclusion in the v2 volume testing suite.
76-77
: Marking test test_system_backup_and_restore_volume_with_data
as a v2 volume test
The decorators correctly include this test in the v2 volume testing suite.
210-212
: Marking test test_system_backup_with_volume_backup_policy_if_not_present
as a v2 volume test
The test is appropriately marked for v2 volume testing.
267-269
: Marking test test_system_backup_with_volume_backup_policy_always
as a v2 volume test
The decorators correctly include this test in the v2 volume testing suite.
304-305
: Including dataEngine=DATA_ENGINE
in DR volume creation
Adding dataEngine=DATA_ENGINE
ensures the DR volume is created with the correct data engine, which is essential for consistency in v2 volume tests.
328-330
: Marking test test_system_backup_with_volume_backup_policy_disabled
as a v2 volume test
The test is appropriately marked for inclusion in v2 volume testing.
360-362
: Marking test test_system_backup_delete_when_other_system_backup_using_name_as_prefix
as a v2 volume test
The decorators correctly include this test in the v2 volume testing suite.
test_framework/scripts/longhorn-setup.sh (2)
407-408
:
Use of Undefined Variable TF_VAR_extra_block_device
The variable TF_VAR_extra_block_device
is referenced but may not be assigned before use, which can cause issues during script execution.
Apply this diff to ensure that TF_VAR_extra_block_device
is defined before use:
+ TF_VAR_extra_block_device=${TF_VAR_extra_block_device:-""}
Alternatively, ensure that TF_VAR_extra_block_device
is assigned a value before it's used, possibly at the beginning of the script or by exporting it as an environment variable.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 407-407: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
475-476
:
Use of Undefined Variable TF_VAR_extra_block_device
The variable TF_VAR_extra_block_device
is referenced but may not be assigned before use, which can cause issues during script execution.
Apply this diff to ensure that TF_VAR_extra_block_device
is defined before use:
+ TF_VAR_extra_block_device=${TF_VAR_extra_block_device:-""}
Alternatively, make sure TF_VAR_extra_block_device
is assigned a value before it's used.
manager/integration/tests/test_recurring_job.py (6)
84-84
: Importing DATA_ENGINE
from the common
module
Adding from common import DATA_ENGINE
correctly imports the DATA_ENGINE
variable needed for specifying the data engine in recurring job tests.
158-159
: Marking test test_recurring_job
as a v2 volume test
The decorators @pytest.mark.v2_volume_test
and @pytest.mark.recurring_job
correctly include test_recurring_job
in the v2 volume testing suite.
290-291
: Marking test test_recurring_job_in_volume_creation
as a v2 volume test
The test is appropriately marked for v2 volume testing.
354-355
: Marking test test_recurring_job_duplicated
as a v2 volume test
This test function is correctly marked for inclusion in the v2 volume testing suite.
382-385
: Marking test test_recurring_job_in_storageclass
as a v2 volume test
The decorators appropriately include this test in the v2 volume testing suite.
457-459
: Marking test test_recurring_job_labels
as a v2 volume test
The test is correctly marked for v2 volume testing.
manager/integration/tests/test_scheduling.py (6)
92-92
: Importing DATA_ENGINE
from the common
module
Adding from common import DATA_ENGINE
correctly imports the DATA_ENGINE
variable needed for specifying the data engine in scheduling tests.
150-151
: Marking test test_soft_anti_affinity_scheduling
as a v2 volume test
The decorators @pytest.mark.v2_volume_test
appropriately mark test_soft_anti_affinity_scheduling
for inclusion in the v2 volume testing suite.
1904-1905
: Including replicaSoftAntiAffinity
and dataEngine
parameters in volume creation
Specifying replicaSoftAntiAffinity="enabled"
and dataEngine=DATA_ENGINE
in the volume creation ensures that the volume settings are correctly applied for the test test_soft_anti_affinity_scheduling_volume_enable
.
1957-1958
: Setting replicaSoftAntiAffinity
and dataEngine
parameters in volume creation
In test_soft_anti_affinity_scheduling_volume_disable
, specifying replicaSoftAntiAffinity="disabled"
and dataEngine=DATA_ENGINE
ensures correct testing of the behavior when soft anti-affinity is disabled.
2230-2231
: Specifying dataEngine=DATA_ENGINE
in volume creation
Including dataEngine=DATA_ENGINE
in the volume creation parameters ensures the volume uses the correct data engine version in the test for disk soft anti-affinity.
2401-2402
: Setting replicaDiskSoftAntiAffinity
and dataEngine
in volume creation
The volume is created with replicaDiskSoftAntiAffinity="disabled"
and dataEngine=DATA_ENGINE
to properly test the disk soft anti-affinity settings in test_volume_disk_soft_anti_affinity
.
manager/integration/tests/common.py (1)
441-442
: LGTM: Data engine parameter additions
The data engine parameter additions to volume creation and storage class configuration are consistent and well-implemented.
Also applies to: 561-561, 1580-1582
manager/integration/tests/test_basic.py (1)
107-107
: LGTM! V2 volume test support added correctly
The changes appropriately add v2 volume support by including the dataEngine parameter in volume creation calls. The implementation is consistent across all test functions.
Also applies to: 272-280
BLOCK_DEV_PATH = "/dev/xvdh"
🛠️ Refactor suggestion
Consider making block device path configurable
The block device path /dev/xvdh
is hardcoded. Consider making this configurable through an environment variable to support different environments and configurations.
-BLOCK_DEV_PATH = "/dev/xvdh"
+BLOCK_DEV_PATH = os.environ.get('LONGHORN_BLOCK_DEVICE_PATH', '/dev/xvdh')
88a619a to 18e525d
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (8)
manager/integration/tests/test_csi.py (2)
Line range hint
645-671
: Address theFIXME
intest_xfs_pv_existing_volume
The test function
test_xfs_pv_existing_volume
includes aFIXME
comment indicating that data should be written in step 4 and validated in step 5 to ensure the disk isn't reformatted. Implementing this step will enhance the test by verifying data persistence after formatting and pod creation.Would you like assistance in updating the test to include writing data before formatting and validating it after pod creation?
Line range hint
802-923
: Improve reliability intest_allow_volume_creation_with_degraded_availability_csi
The test uses a workaround for potential flakiness when verifying data after pod creation. To enhance test reliability, consider implementing a more robust waiting mechanism to ensure the pod is fully ready before performing the data checksum validation. This could involve checking the pod's status or incorporating readiness probes.
manager/integration/tests/test_csi_snapshotter.py (2)
Line range hint
415-534
: Fix parameter order inrestore_csi_volume_snapshot
callsIn
test_csi_volumesnapshot_basic
, when callingrestore_csi_volume_snapshot
, thestorage_class
parameter is missing, leading to a mismatch with the updated function signature.Apply this diff to include the missing parameter:
restore_csi_volume_snapshot(core_api, client, csivolsnap, restore_pvc_name, - restore_pvc_size) + restore_pvc_size, + storage_class)
Line range hint
555-638
: Correct function call parameters intest_csi_volumesnapshot_restore_existing_backup
The call to
restore_csi_volume_snapshot
is missing thestorage_class
parameter, which is required due to the updated function signature.Apply this diff to fix the parameter mismatch:
restore_csi_volume_snapshot(core_api, client, csivolsnap, restore_pvc_name, - restore_pvc_size) + restore_pvc_size, + storage_class)manager/integration/pytest.ini (1)
21-21
: Add description for the new markerv2_volume_test
To improve clarity, consider adding a brief description for the newly added
v2_volume_test
marker. This helps other developers understand the purpose of the marker when running tests.Apply this diff to include the description:
markers = volume_backup_restore + v2_volume_test: Marks tests specific to version 2 of volume functionalities
manager/integration/tests/test_system_backup_restore.py (1)
Line range hint
210-258
: Enhance clarity intest_system_backup_with_volume_backup_policy_if_not_present
The test steps are well-defined. To improve readability, consider updating the docstring to clearly state the expected behavior when creating system backups with the
if-not-present
policy, especially regarding backup counts after each operation.manager/integration/tests/common.py (1)
326-332
: Simplify data engine selection using ternary operatorThe data engine selection logic can be simplified using a ternary operator.
-enable_v2 = os.environ.get('RUN_V2_TEST') -if enable_v2 == "true": - DATA_ENGINE = "v2" -else: - DATA_ENGINE = "v1" +enable_v2 = os.environ.get('RUN_V2_TEST') +DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"🧰 Tools
🪛 Ruff (0.8.2)
328-331: Use ternary operator
DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
instead ofif
-else
-blockReplace
if
-else
-block withDATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
manager/integration/tests/test_basic.py (1)
5352-5352
: Simplify boolean expressionThe boolean expression can be simplified.
-migratable=True if access_mode == "rwx" else False, +migratable=access_mode == "rwx",🧰 Tools
🪛 Ruff (0.8.2)
5352-5352: Remove unnecessary
True if ... else False
Remove unnecessary
True if ... else False
(SIM210)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
manager/integration/pytest.ini
(1 hunks)manager/integration/tests/common.py
(16 hunks)manager/integration/tests/test_basic.py
(63 hunks)manager/integration/tests/test_csi.py
(10 hunks)manager/integration/tests/test_csi_snapshotter.py
(17 hunks)manager/integration/tests/test_recurring_job.py
(43 hunks)manager/integration/tests/test_scheduling.py
(26 hunks)manager/integration/tests/test_system_backup_restore.py
(8 hunks)test_framework/Jenkinsfile
(1 hunks)test_framework/scripts/longhorn-setup.sh
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- test_framework/Jenkinsfile
🧰 Additional context used
🪛 Shellcheck (0.10.0)
test_framework/scripts/longhorn-setup.sh
[warning] 442-442: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🪛 Ruff (0.8.2)
manager/integration/tests/common.py
328-331: Use ternary operator DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
instead of if
-else
-block
Replace if
-else
-block with DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
manager/integration/tests/test_basic.py
5352-5352: Remove unnecessary True if ... else False
Remove unnecessary True if ... else False
(SIM210)
🔇 Additional comments (17)
manager/integration/tests/test_csi.py (2)
59-60
: Ensure dataEngine
parameter is supported in create_volume
The addition of dataEngine=DATA_ENGINE
when calling cli.create_volume
in the create_pv_storage
function is appropriate if the create_volume
method accepts the dataEngine
parameter. Please verify that dataEngine
is a valid parameter for cli.create_volume
and that DATA_ENGINE
is correctly defined.
Line range hint 925-992
: Verify minimal volume size enforcement in test_csi_minimal_volume_size
The test verifies that requesting a PVC size smaller than the minimum allowed size results in provisioning a volume with the minimal size. Ensure that this behavior aligns with the system's expected minimal volume size policy. The assertions correctly check that the requested size is 5Mi
and the actual provisioned size is 10Mi
.
manager/integration/tests/test_csi_snapshotter.py (2)
Line range hint 363-380
: Confirm correct usage of storage_class
in restore_csi_volume_snapshot
The function restore_csi_volume_snapshot
now includes storage_class
as a parameter and creates the storage class using create_storage_class(storage_class)
. Ensure that storage_class
is properly defined and passed when this function is called, and that DEFAULT_STORAGECLASS_NAME
is the intended name for the storage class in your environment.
Line range hint 659-713
: Ensure consistent passing of storage_class
in test_csi_snapshot_with_bak_param
In test_csi_snapshot_with_bak_param
, confirm that storage_class
is passed to test_csi_volumesnapshot_basic
to match the updated function signature and ensure proper test execution.
manager/integration/tests/test_system_backup_restore.py (3)
38-38
: Confirm DATA_ENGINE
import and usage
The import of DATA_ENGINE
from common
is appropriate. Verify that DATA_ENGINE
is correctly defined and used in subsequent volume creation calls that require this parameter.
Line range hint 76-107
: Ensure data integrity in test_system_backup_and_restore_volume_with_data
The test accurately restores a volume with data and checks the data integrity post-restoration. Verify that the data written before backup matches the data read after restoration to ensure correctness.
Line range hint 267-324
: Handle DR volumes in test_system_backup_with_volume_backup_policy_always
The comment states that system backups should skip creating backups for DR volumes. Verify that the test ensures DR volumes are not backed up when the always
policy is used, and that volume backups are created only for the intended volumes.
test_framework/scripts/longhorn-setup.sh (2)
515-517
: Fix undefined variable TF_VAR_extra_block_device
The script uses TF_VAR_extra_block_device
variable without ensuring it's defined first.
441-443
:
Fix undefined variable TF_VAR_extra_block_device
The script uses TF_VAR_extra_block_device
variable without ensuring it's defined first. This could cause script failures.
Add a check and default value for the variable:
+# Set default value for extra block device if not defined
+: "${TF_VAR_extra_block_device:=""}"
+
## for v2 volume test
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}"
Likely invalid or redundant comment.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 442-442: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
manager/integration/tests/test_recurring_job.py (1)
84-84
: LGTM! Consistent integration of v2 volume testing
The changes properly integrate v2 volume testing support by:
- Adding DATA_ENGINE import
- Consistently using dataEngine parameter in volume creation calls
- Marking relevant tests with @pytest.mark.v2_volume_test
Also applies to: 235-236, 329-330, 494-495, 1023-1024, 1025-1026, 1106-1107, 1169-1170, 1323-1324, 1455-1456, 1457-1458, 1459-1460, 1543-1544, 1581-1582, 1633-1634, 1635-1636, 1717-1718, 1811-1812, 1969-1970, 1971-1972
manager/integration/tests/test_scheduling.py (1)
92-92
: LGTM! Consistent integration of v2 volume testing
The changes properly integrate v2 volume testing support following the same pattern as other test files:
- Adding DATA_ENGINE import
- Consistently using dataEngine parameter in volume creation calls
- Marking relevant tests with @pytest.mark.v2_volume_test
Also applies to: 2098-2099, 2230-2231, 2318-2319, 2401-2402
manager/integration/tests/common.py (3)
221-221
: LGTM! V2 data engine integration looks good
The integration of v2 data engine support is well implemented:
- Constants and settings follow naming conventions
- Clear separation of v1 vs v2 configuration
- Proper handling of disk paths and storage reservation
- Good validation and setting of data engine values
Also applies to: 3615-3622, 3647-3648, 3660-3664, 3718-3731
441-442
: LGTM! Volume creation parameter updates are consistent
The addition of dataEngine parameter to volume creation functions is consistent and well integrated with the existing code.
Also applies to: 561-561, 6147-6148, 6416-6417
89-90
:
Consider making block device path configurable
The block device path /dev/xvdh
is hardcoded. Consider making this configurable through an environment variable to support different environments and configurations.
-BLOCK_DEV_PATH = "/dev/xvdh"
+BLOCK_DEV_PATH = os.environ.get('LONGHORN_BLOCK_DEVICE_PATH', '/dev/xvdh')
Likely invalid or redundant comment.
manager/integration/tests/test_basic.py (3)
107-107
: LGTM: Clean import addition
The addition of DATA_ENGINE import is necessary for v2 volume testing support.
272-276
: Volume creation parameters updated correctly
The volume creation parameters have been updated to include dataEngine=DATA_ENGINE consistently.
127-127
: Well-structured test coverage for v2 volumes
The test suite has been properly updated to support v2 volumes while maintaining comprehensive coverage of:
- Basic volume operations
- Backup and restore functionality
- Error handling and edge cases
- Filesystem operations
- Access mode configurations
Also applies to: 642-642, 1006-1006, 1452-1452, 3204-3204, 3277-3277, 4386-4386, 4507-4507, 4575-4575, 5713-5713
Can kernel modules and hugepage be configured and loaded before running test cases?
LGTM
07656e0 to 902e07a
Actionable comments posted: 1
🔭 Outside diff range comments (4)
manager/integration/tests/test_csi_snapshotter.py (4)
Line range hint
363-383
: Missing 'storage_class' argument in function callsThe function
restore_csi_volume_snapshot
now requires thestorage_class
parameter. However, some calls to this function do not provide this parameter, which will result in aTypeError
.Please ensure that all calls to
restore_csi_volume_snapshot
include thestorage_class
argument. For example, update the function definition to provide a default value or update all function calls accordingly.Option 1: Update function calls
Example of updating a function call missing the
storage_class
parameter:def test_csi_volumesnapshot_backing_image_basic(client, core_api, csi_pv, pod_make, pvc, request, volume_name, volumesnapshotclass, volumesnapshot): ... restore_csi_volume_snapshot(core_api, client, csivolsnap, restore_pvc_name, restore_pvc_size, + storage_class, wait_for_restore=False)
Option 2: Provide a default value in function definition
-def restore_csi_volume_snapshot(core_api, client, csivolsnap, pvc_name, pvc_request_storage_size, storage_class, wait_for_restore=True): +def restore_csi_volume_snapshot(core_api, client, csivolsnap, pvc_name, pvc_request_storage_size, storage_class=None, wait_for_restore=True):If
storage_class
is not essential for some tests, providing a default value will prevent breaking existing function calls.
Line range hint
1165-1167
: Function call missing 'storage_class' argumentIn
test_csi_volumesnapshot_backing_image_basic
, the call torestore_csi_volume_snapshot
lacks the requiredstorage_class
parameter, which will result in aTypeError
.Update the function call to include
storage_class
:restore_csi_volume_snapshot(core_api, client, csivolsnap, restore_pvc_name, restore_pvc_size, + storage_class, wait_for_restore=False)
Line range hint
1231-1233
: Function call missing 'storage_class' argumentIn
test_csi_volumesnapshot_restore_pre_provision_backing_image
, the call torestore_csi_volume_snapshot
is missing thestorage_class
parameter.Include
storage_class
in the function call:restore_csi_volume_snapshot(core_api, client, csivolsnap, restore_pvc_name, restore_pvc_size, + storage_class)
Line range hint
1275-1277
: Function call missing 'storage_class' argumentIn
test_csi_volumesnapshot_restore_on_demand_backing_image
, thestorage_class
parameter is not provided in the call torestore_csi_volume_snapshot
.Add
storage_class
to the function call:restore_csi_volume_snapshot(core_api, client, csivolsnap, restore_pvc_name, restore_pvc_size, + storage_class)
♻️ Duplicate comments (1)
test_framework/scripts/longhorn-setup.sh (1)
454-455
:⚠️ Potential issueUse of undefined variable 'TF_VAR_extra_block_device'
The variable 'TF_VAR_extra_block_device' is referenced but not assigned before use, which may cause issues during script execution.
Ensure that 'TF_VAR_extra_block_device' is assigned a value before it's used. You can assign it at the beginning of the script or ensure it's passed as an environment variable.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 454-454: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🧹 Nitpick comments (4)
manager/integration/tests/common.py (3)
326-332
: Consider using a ternary operator for better readabilityThe data engine selection can be simplified using a ternary operator.
-enable_v2 = os.environ.get('RUN_V2_TEST') -if enable_v2 == "true": - DATA_ENGINE = "v2" -else: - DATA_ENGINE = "v1" +enable_v2 = os.environ.get('RUN_V2_TEST') +DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"🧰 Tools
🪛 Ruff (0.8.2)
328-331: Use ternary operator
DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
instead ofif
-else
-blockReplace
if
-else
-block withDATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
3615-3622
: Add comment explaining disk configuration differencesThe disk configuration varies between v1 and v2 data engines. Consider adding a comment to explain why different paths and types are used.
enable_v2 = os.environ.get('RUN_V2_TEST') +# v2 data engine requires block device while v1 uses filesystem if enable_v2 == "true": default_disk_path = BLOCK_DEV_PATH disk_type = "block" else: default_disk_path = DEFAULT_DISK_PATH disk_type = "filesystem"
3718-3731
: Consider refactoring data engine settings updateThe v1/v2 data engine settings update logic contains some duplication. Consider refactoring to make it more concise.
-enable_v2 = os.environ.get('RUN_V2_TEST') == "true" -v1_setting_value = "false" if enable_v2 else "true" -v2_setting_value = "true" if enable_v2 else "false" - -if setting_name == "v1-data-engine": - setting = client.by_id_setting(SETTING_V1_DATA_ENGINE) - client.update(setting, value=v1_setting_value) - continue - -if setting_name == "v2-data-engine": - setting = client.by_id_setting(SETTING_V2_DATA_ENGINE) - client.update(setting, value=v2_setting_value) - continue +enable_v2 = os.environ.get('RUN_V2_TEST') == "true" +data_engine_settings = { + "v1-data-engine": str(not enable_v2).lower(), + "v2-data-engine": str(enable_v2).lower() +} +if setting_name in data_engine_settings: + setting = client.by_id_setting(setting_name) + client.update(setting, value=data_engine_settings[setting_name]) + continuemanager/integration/tests/test_basic.py (1)
5352-5352
: Simplify the ternary operationThe ternary operation can be simplified by directly using the boolean comparison.
-migratable=True if access_mode == "rwx" else False, +migratable=access_mode == "rwx",🧰 Tools
🪛 Ruff (0.8.2)
5352-5352: Remove unnecessary
True if ... else False
Remove unnecessary
True if ... else False
(SIM210)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
manager/integration/pytest.ini
(1 hunks)manager/integration/tests/common.py
(16 hunks)manager/integration/tests/test_basic.py
(63 hunks)manager/integration/tests/test_csi.py
(10 hunks)manager/integration/tests/test_csi_snapshotter.py
(17 hunks)manager/integration/tests/test_recurring_job.py
(43 hunks)manager/integration/tests/test_scheduling.py
(26 hunks)manager/integration/tests/test_system_backup_restore.py
(8 hunks)test_framework/Jenkinsfile
(1 hunks)test_framework/scripts/longhorn-setup.sh
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- test_framework/Jenkinsfile
- manager/integration/pytest.ini
🧰 Additional context used
🪛 Shellcheck (0.10.0)
test_framework/scripts/longhorn-setup.sh
[warning] 454-454: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🪛 Ruff (0.8.2)
manager/integration/tests/common.py
328-331: Use ternary operator DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
instead of if
-else
-block
Replace if
-else
-block with DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
manager/integration/tests/test_basic.py
5352-5352: Remove unnecessary True if ... else False
Remove unnecessary True if ... else False
(SIM210)
🔇 Additional comments (14)
manager/integration/tests/test_csi.py (3)
40-40
: Import of 'DATA_ENGINE' is appropriate
The import statement correctly imports 'DATA_ENGINE' from 'common'.
59-60
: Addition of 'dataEngine=DATA_ENGINE' in 'create_volume'
The 'dataEngine' parameter is correctly added to the 'cli.create_volume' function call, ensuring that the volume creation utilizes the specified data engine.
106-106
: Proper use of '@pytest.mark.v2_volume_test' decorator
The addition of the '@pytest.mark.v2_volume_test' decorator to the test functions effectively categorizes them for version 2 volume testing.
Also applies to: 147-147, 199-199, 645-645, 684-684, 802-802, 925-925
manager/integration/tests/test_system_backup_restore.py (3)
38-38
: Import of 'DATA_ENGINE' is appropriate
The import statement correctly brings in 'DATA_ENGINE' from 'common'.
48-48
: Appropriate addition of '@pytest.mark.v2_volume_test' decorator
The test functions are correctly marked with the '@pytest.mark.v2_volume_test' decorator to include them in version 2 volume testing.
Also applies to: 76-76, 210-210, 267-267, 328-328, 360-360
304-305
: Inclusion of 'dataEngine=DATA_ENGINE' in volume creation
The 'dataEngine' parameter is correctly added to the 'client.create_volume' function call, specifying the data engine during volume creation.
manager/integration/tests/test_csi_snapshotter.py (1)
42-44
: Import statements for storage class utilities are correct
The import of 'storage_class', 'create_storage_class', and 'DEFAULT_STORAGECLASS_NAME' from 'common' is appropriate.
manager/integration/tests/test_recurring_job.py (3)
84-84
: LGTM!
The DATA_ENGINE import is correctly placed with other imports from the common module and follows the import organization pattern.
235-236
: LGTM! Consistent parameter addition
The dataEngine parameter has been consistently added to all volume creation calls, maintaining uniformity across the test cases.
Also applies to: 329-330, 494-495, 548-549
158-158
: LGTM! Consistent test categorization
The @pytest.mark.v2_volume_test
marker is appropriately and consistently applied to test cases that verify v2 volume functionality.
Also applies to: 290-290, 354-354, 382-382, 457-457
manager/integration/tests/test_scheduling.py (3)
92-92
: LGTM!
The DATA_ENGINE import is correctly placed and follows the same pattern as other test files.
1107-1108
: LGTM! Consistent parameter addition
The dataEngine parameter has been consistently added to all volume creation calls, maintaining uniformity across the test cases.
Also applies to: 2230-2231, 2318-2319, 2401-2402
150-150
: LGTM! Consistent test categorization
The @pytest.mark.v2_volume_test
marker is appropriately and consistently applied to test cases that verify v2 volume functionality.
Also applies to: 190-190, 236-236, 286-286
manager/integration/tests/test_basic.py (1)
107-107
: LGTM: Consistent addition of DATA_ENGINE parameter
The changes consistently add the DATA_ENGINE parameter to volume creation calls across all test cases, which is required for v2 volume testing. The implementation is correct and maintains consistency across the test suite.
Also applies to: 272-280, 1425-1426, 1543-1544, 1547-1548, 1551-1552, 1062-1063, 1066-1067, 2593-2594, 2597-2598, 2601-2602, 3243-3244, 3341-3341, 3471-3472, 3927-3928
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" | ||
Use of undefined variable 'TF_VAR_extra_block_device'
The variable 'TF_VAR_extra_block_device' is referenced but not assigned before use in this section as well.
Assign 'TF_VAR_extra_block_device' before using it to prevent potential runtime errors.
Yes, they're enabled and set in terraform scripts
Actionable comments posted: 1
🧹 Nitpick comments (3)
manager/integration/tests/common.py (2)
326-332
: Consider using a ternary operator for better readabilityThe data engine version determination can be simplified using a ternary operator.
-enable_v2 = os.environ.get('RUN_V2_TEST') -if enable_v2 == "true": - DATA_ENGINE = "v2" -else: - DATA_ENGINE = "v1" +enable_v2 = os.environ.get('RUN_V2_TEST') +DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"🧰 Tools
🪛 Ruff (0.8.2)
328-331: Use ternary operator
DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
instead ofif
-else
-blockReplace
if
-else
-block withDATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
3615-3622
: Consider more descriptive variable names for disk configurationThe variable names could be more descriptive to better indicate their purpose in configuring disk paths and types for different data engines.
- default_disk_path = BLOCK_DEV_PATH - disk_type = "block" + engine_disk_path = BLOCK_DEV_PATH + engine_disk_type = "block"manager/integration/tests/test_basic.py (1)
5352-5353
: Simplify unnecessary ternary operationThe ternary operation can be simplified since it's already returning a boolean value.
-migratable=True if access_mode == "rwx" else False, +migratable=access_mode == "rwx",🧰 Tools
🪛 Ruff (0.8.2)
5352-5352: Remove unnecessary
True if ... else False
Remove unnecessary
True if ... else False
(SIM210)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
manager/integration/pytest.ini (1 hunks)
manager/integration/tests/common.py (16 hunks)
manager/integration/tests/test_basic.py (63 hunks)
manager/integration/tests/test_csi.py (10 hunks)
manager/integration/tests/test_csi_snapshotter.py (17 hunks)
manager/integration/tests/test_recurring_job.py (43 hunks)
manager/integration/tests/test_scheduling.py (26 hunks)
manager/integration/tests/test_system_backup_restore.py (8 hunks)
test_framework/Jenkinsfile (1 hunks)
test_framework/scripts/longhorn-setup.sh (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- manager/integration/pytest.ini
- test_framework/Jenkinsfile
🧰 Additional context used
🪛 Ruff (0.8.2)
manager/integration/tests/common.py
328-331: Use ternary operator DATA_ENGINE = "v2" if enable_v2 == "true" else "v1" instead of if-else block.
Replace if-else block with DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
manager/integration/tests/test_basic.py
5352-5352: Remove unnecessary True if ... else False
(SIM210)
🪛 Shellcheck (0.10.0)
test_framework/scripts/longhorn-setup.sh
[warning] 454-454: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🔇 Additional comments (22)
test_framework/scripts/longhorn-setup.sh (1)
527-529
: Add validation for TF_VAR_extra_block_device variable.
The script uses TF_VAR_extra_block_device
without ensuring it's defined, which could cause issues during script execution.
manager/integration/tests/test_system_backup_restore.py (2)
38-38
: LGTM: DATA_ENGINE parameter integration
The DATA_ENGINE parameter is correctly integrated into volume creation calls, maintaining consistency with the v2 volume testing framework.
Also applies to: 304-305
48-48
: LGTM: Test categorization with v2_volume_test marker
The v2_volume_test marker is appropriately added to relevant test functions, properly categorizing them for v2 volume testing.
Also applies to: 76-76, 210-210, 267-267, 328-328, 360-360, 415-415, 555-555, 659-659, 770-770, 1023-1023, 1120-1120, 1175-1175
manager/integration/tests/test_csi.py (2)
40-40
: LGTM: DATA_ENGINE parameter integration in CSI volume creation
The DATA_ENGINE parameter is correctly integrated into the CSI volume creation workflow through the create_pv_storage function.
Also applies to: 59-60
106-106
: LGTM: Test categorization with v2_volume_test marker
The v2_volume_test marker is appropriately added to CSI-related test functions, properly categorizing them for v2 volume testing.
Also applies to: 147-147, 199-199, 254-254, 645-645, 684-684, 802-802, 925-925
manager/integration/tests/test_csi_snapshotter.py (3)
42-44
: LGTM: Enhanced storage class imports
Added necessary imports for improved storage class handling in CSI snapshot operations.
363-364
: LGTM: Improved storage class integration in restore_csi_volume_snapshot
The function now properly creates storage classes and uses the default storage class name, enhancing the robustness of the CSI snapshot restoration process.
Also applies to: 380-380
415-415
: LGTM: Test categorization with v2_volume_test marker
The v2_volume_test marker is appropriately added to CSI snapshotter test functions, properly categorizing them for v2 volume testing.
Also applies to: 555-555, 659-659, 770-770, 1023-1023, 1120-1120, 1175-1175
manager/integration/tests/test_recurring_job.py (7)
84-84
: LGTM: DATA_ENGINE parameter correctly integrated
The addition of the DATA_ENGINE parameter to volume creation is consistent with the PR objectives to support v2 volumes.
Also applies to: 235-236
329-330
: LGTM: Consistent DATA_ENGINE parameter usage
The DATA_ENGINE parameter is correctly added to volume creation, maintaining consistency across test cases.
1023-1026
: LGTM: DATA_ENGINE parameter correctly added to multiple volume creations
The DATA_ENGINE parameter is consistently applied to both volume creation calls within the test.
1106-1107
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to the volume creation call.
1169-1170
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to the volume creation call.
1323-1324
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to the volume creation call.
2081-2082
: LGTM: DATA_ENGINE parameter consistently applied
The DATA_ENGINE parameter is correctly added to all volume creation calls in the test function.
Also applies to: 2116-2117, 2127-2128
manager/integration/tests/test_scheduling.py (5)
1527-1528
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to volume creation calls in the test function.
Also applies to: 1600-1601
1107-1108
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to the volume creation call.
2098-2099
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to the volume creation call.
2230-2231
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to the volume creation call.
2318-2319
: LGTM: DATA_ENGINE parameter correctly integrated
The DATA_ENGINE parameter is properly added to the volume creation call.
manager/integration/tests/common.py (1)
89-90
: LGTM! V2 data engine integration looks correct
The changes for v2 data engine support are well integrated:
- Constants are properly defined
- Storage class configuration correctly handles v2 parameters
- Volume creation properly includes data engine parameter
Also applies to: 221-221, 1580-1582, 6147-6148, 6416-6417
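As a sketch of what "handles v2 parameters" can mean in practice — the parameter names below follow Longhorn's StorageClass conventions but are assumptions for illustration, not copied from this diff:
def longhorn_storageclass_parameters(data_engine):
    # Returns StorageClass parameters that follow the engine under test.
    # Treat the keys as illustrative rather than an exact copy of what common.py builds.
    return {
        "numberOfReplicas": "3",
        "staleReplicaTimeout": "30",
        "dataEngine": data_engine,  # "v1" or "v2", driven by RUN_V2_TEST
    }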
manager/integration/tests/test_basic.py (1)
107-107
: LGTM! Consistent v2 volume support added
The changes appropriately add v2 volume support by:
- Importing the required DATA_ENGINE constant
- Adding dataEngine parameter consistently across all volume operations
- Preserving existing test logic while enabling v2 volume testing
The modifications maintain test coverage while enabling testing of v2 volumes.
Also applies to: 272-280, 1425-1426, 1543-1544, 1547-1548, 1551-1552, 2103-2104, 2151-2152, 5420-5421, 5471-5472
## for v2 volume test
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}"
Add validation for TF_VAR_extra_block_device variable.
The script uses TF_VAR_extra_block_device
without ensuring it's defined, which could cause issues during script execution. Consider adding validation before using the variable.
Apply this diff to add validation:
+ ## validate required variables
+ if [[ -z "${TF_VAR_extra_block_device}" ]]; then
+ echo "Error: TF_VAR_extra_block_device is not set"
+ exit 1
+ fi
+
## for v2 volume test
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
## for v2 volume test
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}"
## validate required variables
if [[ -z "${TF_VAR_extra_block_device}" ]]; then
  echo "Error: TF_VAR_extra_block_device is not set"
  exit 1
fi
## for v2 volume test
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}"
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 454-454: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
- test_basic.py - test_recurring_job.py - test_scheduling.py - test_system_backup_restore.py - test_csi.py - test_csi_snapshotter.py - test_migration.py longhorn/longhorn-9760 Signed-off-by: Chris <[email protected]>
2f42d4d to c467597 (Compare)
Actionable comments posted: 2
🔭 Outside diff range comments (1)
manager/integration/tests/test_csi.py (1)
Line range hint
684-710
: Incomplete Test in 'test_xfs_pv_existing_volume'
The test mentions a FIXME note indicating that data should be written before detaching and then verified after reattaching to ensure the filesystem is not reformatted.
Complete the test by adding data write and verification steps:
# After formatting to XFS
test_data = generate_random_data(VOLUME_RWTEST_SIZE)
with open(get_volume_endpoint(volume), 'w') as f:
    f.write(test_data)
# Detach and reattach as per existing steps
# After pod is running
resp = read_volume_data(core_api, pod_manifest['metadata']['name'])
assert resp == test_data
♻️ Duplicate comments (1)
test_framework/scripts/longhorn-setup.sh (1)
454-454
:⚠️ Potential issue
Use of Undefined Variable 'TF_VAR_extra_block_device'
The variable
TF_VAR_extra_block_device
is referenced but not assigned before use, which may cause issues during script execution. Ensure that
TF_VAR_extra_block_device
is assigned a value before it's used. This can be done by defining it at the beginning of the script or passing it as an environment variable.
Also applies to lines 528.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 454-454: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🧹 Nitpick comments (7)
manager/integration/tests/test_system_backup_restore.py (2)
Line range hint
210-263
: Missing Volume Cleanup in Test Cases
In the test functions
test_system_backup_with_volume_backup_policy_if_not_present
,test_system_backup_with_volume_backup_policy_always
, andtest_system_backup_with_volume_backup_policy_disabled
, the created volumes are not cleaned up after the tests. This may lead to leftover resources that could affect subsequent tests.Consider adding cleanup steps to delete the volumes after the tests complete. This can be achieved by including a
finally
block or using a fixture with teardown capabilities.
try:
    # Test steps
finally:
    cleanup_volume(client, volume)
Line range hint
360-388
: Hardcoded Sleep May Cause Flaky Tests
In the test
test_system_backup_delete_when_other_system_backup_using_name_as_prefix
, there is a hardcodedtime.sleep(60)
after deleting the system backup. Relying on sleep statements may lead to flaky tests due to timing issues.Consider replacing the sleep with a polling mechanism that checks the system backups until the expected condition is met or a timeout occurs.
# Replace time.sleep(60)
for _ in range(RETRY_COUNTS):
    system_backups = client.list_system_backup()
    if len(system_backups) == 2:
        break
    time.sleep(RETRY_INTERVAL)
else:
    assert False, "Timed out waiting for system backups to update"
manager/integration/tests/test_csi.py (2)
Line range hint
645-682
: Missing Data Validation After Pod CreationIn the test
test_xfs_pv
, after creating the pod and writing data, there is no verification to ensure that the filesystem is indeed XFS and that the data persists correctly.Consider adding a step to verify the filesystem type and validate data persistence:
# Verify filesystem type is XFS command = "df -T /data | tail -n 1 | awk '{print $2}'" fs_type = exec_command_in_pod(core_api, command, pod_name, 'default').strip() assert fs_type == "xfs", "Filesystem is not XFS" # Data validation resp = read_volume_data(core_api, pod_name) assert resp == test_data
Line range hint
925-958
: Incorrect Volume Size Handling in 'test_csi_minimal_volume_size'When creating a PVC requesting size 5MiB, the test expects the PVC capacity to be 10MiB due to minimum volume size constraints. However, the test uses hardcoded strings, which might not be accurate.
Consider using constants and ensuring that size comparisons are accurate:
LOW_STORAGE = 5 * Mi MIN_STORAGE = 10 * Mi # ... assert claim.spec.resources.requests['storage'] == str(LOW_STORAGE) assert claim.status.capacity['storage'] == str(MIN_STORAGE)Also, ensure that the storage size units are consistent and correctly converted to strings for comparison.
manager/integration/tests/test_csi_snapshotter.py (1)
Line range hint
363-380
: LGTM! Consider adding parameter validation.The changes to include storage_class parameter and use DEFAULT_STORAGECLASS_NAME improve flexibility. Consider adding validation for the storage_class parameter to ensure it's not None.
def restore_csi_volume_snapshot(core_api, client, csivolsnap, pvc_name, pvc_request_storage_size, storage_class, wait_for_restore=True): + if not storage_class: + raise ValueError("storage_class parameter is required") create_storage_class(storage_class)manager/integration/tests/common.py (1)
326-332
: Simplify DATA_ENGINE configuration using ternary operatorThe code can be more concise by using a ternary operator.
-enable_v2 = os.environ.get('RUN_V2_TEST') -if enable_v2 == "true": - DATA_ENGINE = "v2" -else: - DATA_ENGINE = "v1" +enable_v2 = os.environ.get('RUN_V2_TEST') +DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"🧰 Tools
🪛 Ruff (0.8.2)
328-331: Use ternary operator
DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
instead ofif
-else
-blockReplace
if
-else
-block withDATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
manager/integration/tests/test_basic.py (1)
5352-5352
: Simplify boolean expressionThe boolean expression can be simplified by removing the unnecessary ternary operation.
- migratable=True if access_mode == "rwx" else False, + migratable=access_mode == "rwx",🧰 Tools
🪛 Ruff (0.8.2)
5352-5352: Remove unnecessary
True if ... else False
Remove unnecessary
True if ... else False
(SIM210)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
manager/integration/pytest.ini
(1 hunks)manager/integration/tests/common.py
(16 hunks)manager/integration/tests/test_basic.py
(63 hunks)manager/integration/tests/test_csi.py
(10 hunks)manager/integration/tests/test_csi_snapshotter.py
(17 hunks)manager/integration/tests/test_migration.py
(9 hunks)manager/integration/tests/test_recurring_job.py
(43 hunks)manager/integration/tests/test_scheduling.py
(26 hunks)manager/integration/tests/test_system_backup_restore.py
(8 hunks)test_framework/Jenkinsfile
(1 hunks)test_framework/scripts/longhorn-setup.sh
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- manager/integration/pytest.ini
- test_framework/Jenkinsfile
🧰 Additional context used
🪛 Ruff (0.8.2)
manager/integration/tests/common.py
328-331: Use ternary operator DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
instead of if
-else
-block
Replace if
-else
-block with DATA_ENGINE = "v2" if enable_v2 == "true" else "v1"
(SIM108)
manager/integration/tests/test_basic.py
5352-5352: Remove unnecessary True if ... else False
Remove unnecessary True if ... else False
(SIM210)
🪛 Shellcheck (0.10.0)
test_framework/scripts/longhorn-setup.sh
[warning] 454-454: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
🔇 Additional comments (13)
manager/integration/tests/test_migration.py (1)
208-209
: Ensure Consistent Replica Counts for DATA_ENGINE 'v1'
When DATA_ENGINE
is set to "v1"
, the assertion verifies that the number of old_replicas
is equal to the number of new_replicas
. Please confirm that this condition accurately represents the expected behavior for the v1 data engine.
If necessary, run the following script to check the consistency of replica counts:
✅ Verification successful
Let me gather more context about the test file to understand the full implementation.
Let me check for the new_replicas
initialization and the full context around the assertion.
Let me check the DATA_ENGINE constant definition to understand its role in the test configuration.
The assertion is correct for DATA_ENGINE 'v1' migration behavior
The verification confirms that the assertion is valid based on the following findings:
- The test is part of a volume migration test suite where replicas are tracked before and after migration.
- For DATA_ENGINE "v1", the code consistently enforces replica count equality throughout the migration process:
new_replicas
only includes replicas that weren't in old_replicas
- The assertion
len(old_replicas) == len(new_replicas)
ensures the migration creates the same number of new replicas as there were old ones
- This behavior is specifically tied to v1 data engine, with v2 having different expectations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the number of old replicas equals new replicas for DATA_ENGINE 'v1'
# Retrieve the old and new replica counts
echo "Old Replicas Count:"
grep -A 5 'old_replicas =' manager/integration/tests/test_migration.py | grep 'append' | wc -l
echo "New Replicas Count:"
grep -A 5 'new_replicas =' manager/integration/tests/test_migration.py | grep 'append' | wc -l
Length of output: 291
Script:
#!/bin/bash
# Get the full context around the replica initialization and assertion
echo "=== Full context of test_migration.py ==="
rg -B 10 -A 10 "old_replicas = \[\]" manager/integration/tests/test_migration.py
Length of output: 1861
Script:
#!/bin/bash
# Get the context around new_replicas and the assertion
echo "=== Context around new_replicas and assertion ==="
rg -B 10 -A 10 "new_replicas = \[\]" manager/integration/tests/test_migration.py
echo -e "\n=== Context around the DATA_ENGINE assertion ==="
rg -B 10 -A 10 "if DATA_ENGINE == \"v1\":" manager/integration/tests/test_migration.py
Length of output: 2705
Script:
#!/bin/bash
# Find where DATA_ENGINE is defined
echo "=== DATA_ENGINE definition ==="
rg "DATA_ENGINE =" manager/integration/tests/
# Check if there's a test configuration file
echo -e "\n=== Test configuration files ==="
fd -e py -e ini -e cfg -e conf -e yaml -e yml -E "node_modules|dist" "test.*conf" manager/integration/
Length of output: 932
test_framework/scripts/longhorn-setup.sh (1)
453-455
: Add Validation for 'TF_VAR_extra_block_device' Variable
The script uses TF_VAR_extra_block_device
without ensuring it's defined. This could lead to runtime errors if the variable is unset.
Apply this diff to add validation:
+ ## Validate required variables
+ if [[ -z "${TF_VAR_extra_block_device}" ]]; then
+ echo "Error: TF_VAR_extra_block_device is not set"
+ exit 1
+ fi
## For v2 volume test
yq e -i 'select(.spec.containers[0].env != null).spec.containers[0].env += {"name": "RUN_V2_TEST", "value": "'${TF_VAR_extra_block_device}'"}' "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}"
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 454-454: TF_VAR_extra_block_device is referenced but not assigned.
(SC2154)
manager/integration/tests/test_csi_snapshotter.py (2)
Line range hint 415-428
: LGTM! Test coverage enhanced.
The addition of v2_volume_test marker and storage_class parameter properly integrates the new DATA_ENGINE functionality. The csi_snapshot_type parameter improves test coverage by allowing different snapshot types to be tested.
Line range hint 555-568
: LGTM! Test coverage enhanced.
The addition of v2_volume_test marker and storage_class parameter properly integrates the new DATA_ENGINE functionality. The backup_is_deleted parameter improves test coverage by verifying backup deletion behavior.
manager/integration/tests/test_recurring_job.py (2)
235-236
: LGTM! Proper integration of DATA_ENGINE.
The addition of DATA_ENGINE parameter ensures proper volume creation with the new engine.
329-330
: LGTM! Proper integration of DATA_ENGINE.
The addition of DATA_ENGINE parameter ensures proper volume creation with the new engine.
manager/integration/tests/test_scheduling.py (2)
1107-1108
: LGTM! Proper integration of DATA_ENGINE.
The addition of DATA_ENGINE parameter ensures proper volume creation with the new engine.
2098-2099
: LGTM! Proper integration of DATA_ENGINE.
The addition of DATA_ENGINE parameter ensures proper volume creation with the new engine.
manager/integration/tests/common.py (2)
89-90
: LGTM! V2 data engine integration changes look consistent
The changes to support v2 data engine across volume creation, backup, storage class and deployment functions are consistent and well-integrated.
Also applies to: 221-221, 441-442, 561-561, 1580-1582, 6147-6148, 6416-6417
Line range hint 3615-3664
: Verify disk path and type configuration for v2 data engine
The changes for v2 data engine support in disk configuration look correct. However, let's verify the disk paths are properly detected and configured.
✅ Verification successful
Disk path configuration for v2 data engine is correctly implemented
The code correctly handles disk path configuration based on the RUN_V2_TEST
environment variable:
- For v2 data engine (
RUN_V2_TEST=true
): Uses/dev/xvdh
as block device
- For v1 data engine: Uses
/var/lib/longhorn/
as filesystem path
The paths are consistently used across the codebase, particularly in test files and node management code. While the sandbox environment doesn't have these paths available, this is expected as it's just a test environment. The actual paths will be properly configured in the production environment.
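A condensed sketch of the selection logic described above; the constant names mirror the ones mentioned in the verification output, while the surrounding disk-setup code in common.py is not reproduced and may differ in detail:
import os

BLOCK_DEV_PATH = "/dev/xvdh"              # extra block device used for v2
DEFAULT_DISK_PATH = "/var/lib/longhorn/"  # filesystem disk used for v1

if os.environ.get("RUN_V2_TEST") == "true":
    default_disk_path, disk_type = BLOCK_DEV_PATH, "block"
else:
    default_disk_path, disk_type = DEFAULT_DISK_PATH, "filesystem"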
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify disk paths and types are properly configured
# Check if block device exists when v2 is enabled
if [ "$RUN_V2_TEST" = "true" ]; then
if [ ! -b "/dev/xvdh" ]; then
echo "Warning: Block device /dev/xvdh not found but required for v2 data engine"
exit 1
fi
fi
# Check filesystem path exists
if [ ! -d "/var/lib/longhorn/" ]; then
echo "Warning: Default filesystem path /var/lib/longhorn/ not found"
exit 1
fi
Length of output: 197
Script:
#!/bin/bash
# Let's verify the constants and paths defined in the codebase
rg "BLOCK_DEV_PATH|DEFAULT_DISK_PATH" -B2 -A2
# Also check for any other disk path configurations
ast-grep --pattern 'path = "$_"'
Length of output: 17828
manager/integration/tests/test_basic.py (3)
107-107
: LGTM: Import addition for v2 volume support
The addition of DATA_ENGINE import is appropriate for enabling v2 volume testing.
272-276
: LGTM: Volume creation parameter addition
The dataEngine parameter has been correctly added to volume creation calls while maintaining the original test logic.
Also applies to: 279-280
1425-1426
: LGTM: Consistent dataEngine parameter addition
The dataEngine parameter has been consistently added to all volume creation calls throughout the test file. This change properly enables v2 volume testing while preserving the original test functionality.
Also applies to: 1543-1544, 1547-1548, 1551-1552, 1925-1926, 2043-2044, 2103-2104, 2151-2152, 3243-3244, 3341-3341, 3471-3472, 2593-2594, 2597-2598, 2601-2602, 3809-3810, 3927-3928, 5367-5368, 5378-5379, 5420-5421
if DATA_ENGINE == "v1": | ||
replica_cnt = len(new_replicas) | ||
elif DATA_ENGINE == "v2": | ||
replica_cnt = len(old_replicas) | ||
volume = common.wait_for_volume_migration_node( | ||
client, volume_name, local_node, | ||
expected_replica_count=len(new_replicas) | ||
expected_replica_count=replica_cnt |
🛠️ Refactor suggestion
Replica Count Determination May Cause Inconsistencies
The logic for determining replica_cnt
based on DATA_ENGINE
may lead to inconsistencies:
if DATA_ENGINE == "v1":
replica_cnt = len(new_replicas)
elif DATA_ENGINE == "v2":
replica_cnt = len(old_replicas)
This could result in unexpected replica counts during migration. Ensure that the assigned replica_cnt
aligns with the actual number of replicas expected for each data engine version.
Consider simplifying the logic or providing clear documentation on why the replica_cnt
differs between v1
and v2
. If the expected replica count should always match a specific value, adjust the code accordingly.
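For illustration, one way to make the intent explicit is a single expression plus a comment. This is a sketch of a possible clarification, not the author's change, and it assumes the v1/v2 expectations described in this thread are the intended ones:
# Per the discussion above: v1 is expected to end up with the newly created
# replicas, while v2 keeps the original replica set.
expected_replica_count = (
    len(new_replicas) if DATA_ENGINE == "v1" else len(old_replicas)
)
volume = common.wait_for_volume_migration_node(
    client, volume_name, local_node,
    expected_replica_count=expected_replica_count)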
if DATA_ENGINE == "v1": | ||
assert len(old_replicas) == (len(new_replicas) - len(old_replicas)) |
Potential Logical Error in Replica Count Assertion
The assertion len(old_replicas) == (len(new_replicas) - len(old_replicas))
may not produce the expected result. This assertion simplifies to len(old_replicas) == len(new_replicas) - len(old_replicas)
, which implies 2 * len(old_replicas) == len(new_replicas)
. This suggests that the number of new replicas should be twice the number of old replicas, which may not be accurate in this context.
Consider revising the assertion to correctly reflect the intended relationship between old_replicas
and new_replicas
. If the expectation is that the total number of replicas remains the same, the assertion should be:
- assert len(old_replicas) == (len(new_replicas) - len(old_replicas))
+ assert len(new_replicas) == len(old_replicas)
Committable suggestion skipped: line range outside the PR's diff.
Which issue(s) this PR fixes:
Issue #9760
What this PR does / why we need it:
Make the test cases in the files below able to run on v2 volumes
Special notes for your reviewer:
Enable RUN_V2_TEST in Jenkins and use -m v2_volume_test to run the test cases with a block disk.
Additional documentation or context
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
These updates significantly improve the flexibility and reliability of volume management within the system.